Smooth Optimization with Approximate Gradient

Author

  • Alexandre d'Aspremont
Abstract

We show that the optimal complexity of Nesterov’s smooth first-order optimization algorithm is preserved when the gradient is only computed up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this often means computing only a few dominant eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method’s computational cost. This also allows sparse problems to be solved efficiently using maximum eigenvalue packages such as ARPACK.
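The computational shortcut described above can be illustrated with a short sketch. Assuming the standard smooth surrogate of the maximum eigenvalue, f_mu(X) = mu * log tr exp(X/mu), whose exact gradient is the normalized matrix exponential exp(X/mu) / tr exp(X/mu), the approximate gradient below keeps only the k dominant eigenpairs returned by an ARPACK-style partial eigensolver. The function name, the choice of k, and the example matrix are illustrative assumptions, not the paper's exact scheme for bounding the gradient error.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh  # ARPACK-based partial eigensolver

def approx_smooth_gradient(X, mu, k=10):
    """Approximate gradient of f_mu(X) = mu*log(trace(exp(X/mu))), a standard
    Nesterov smoothing of the maximum eigenvalue, using only the k dominant
    eigenpairs instead of the full matrix exponential (illustrative sketch)."""
    w, V = eigsh(X, k=k, which='LA')   # k largest eigenvalues and eigenvectors
    z = np.exp((w - w.max()) / mu)     # shifted exponentials for numerical stability
    z /= z.sum()                       # softmax weights; the dropped eigenvalues
                                       # carry exponentially small mass
    return (V * z) @ V.T               # sum_i z_i * v_i v_i^T

# usage on a random sparse symmetric matrix
n = 500
A = sp.random(n, n, density=0.01, format='csr')
A = 0.5 * (A + A.T)
G = approx_smooth_gradient(A, mu=0.1, k=8)
```

Because only a handful of eigenpairs are required, sparse iterates can be handled by ARPACK at a cost far below a dense eigendecomposition, which is the source of the speed-up the abstract refers to.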


Related articles

Convergence Analysis of the Approximate Proximal Splitting Method for Non-Smooth Convex Optimization

Consider a class of convex minimization problems for which the objective function is the sum of a smooth convex function and a non-smooth convex regularization term. This class of problems includes several popular applications such as compressive sensing and sparse group LASSO. In this thesis, we introduce a general class of approximate proximal splitting (APS) methods for solving such minimization...
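The thesis's APS variants are not spelled out in this snippet, but the model problem it targets can be illustrated with the basic proximal-gradient (ISTA) iteration for an l1-regularized least-squares instance; the function names and the fixed step size are assumptions for the sketch.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Sketch of proximal gradient (ISTA) for
    min_x 0.5*||A x - b||^2 + lam*||x||_1,
    i.e. a smooth convex term plus a non-smooth convex regularizer.
    For convergence, step should be at most 1/||A^T A||_2 (assumption)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox step on the l1 part
    return x
```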


On sequential optimality conditions for smooth constrained optimization

Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate KKT and Approximate Gradient Projection conditions are analyzed in this work. These conditions are not necessarily equivalent. Implications between different conditions and counter-examples will be shown. Algorithmic consequences will be discussed.
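As a concrete illustration of how such conditions translate into a solver stopping test, the sketch below evaluates an approximate-KKT-style residual for min f(x) subject to g(x) <= 0; the specific residual and its use as a tolerance check are a simplified illustration, not the precise AKKT or AGP definitions analyzed in the paper.

```python
import numpy as np

def approx_kkt_residual(grad_f, jac_g, g_val, mu):
    """Simplified approximate-KKT residual at a point x with multiplier
    estimates mu >= 0 (illustrative only):
      stationarity:    || grad f(x) + J_g(x)^T mu ||_inf
      complementarity: max_i min(mu_i, -g_i(x))
    A solver can stop once this residual falls below a tolerance eps."""
    stat = np.linalg.norm(grad_f + jac_g.T @ mu, ord=np.inf)
    compl = np.max(np.minimum(mu, -g_val)) if len(g_val) else 0.0
    return max(stat, compl)
```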


Gradient descent algorithms for quantile regression with smooth approximation

Gradient-based optimization methods often converge quickly to a local optimum. However, the check loss function used by the quantile regression model is not everywhere differentiable, which prevents gradient-based optimization methods from being applied. As such, this paper introduces a smooth function to approximate the check loss function so that gradient-based optimization methods could ...
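One common way to realize this idea is to replace the check (pinball) loss rho_tau(u) = u*(tau - 1{u < 0}) with the smooth surrogate rho_{tau,alpha}(u) = tau*u + alpha*log(1 + exp(-u/alpha)), which recovers the check loss as alpha -> 0. This particular surrogate and the fixed-step gradient descent below are assumptions for the sketch, not necessarily the paper's exact construction.

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def smoothed_check_grad(X, y, beta, tau, alpha=0.1):
    """Gradient of the smoothed check loss
    rho_{tau,alpha}(u) = tau*u + alpha*log(1 + exp(-u/alpha)), averaged over samples."""
    u = y - X @ beta                 # residuals
    d = tau - expit(-u / alpha)      # d rho / d u
    return -X.T @ d / len(y)         # chain rule: du/dbeta = -X

def quantile_gd(X, y, tau=0.5, alpha=0.1, step=0.1, iters=2000):
    """Plain gradient descent on the smoothed quantile-regression loss (sketch)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta -= step * smoothed_check_grad(X, y, beta, tau, alpha)
    return beta
```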


Approximated Function Based Spectral Gradient Algorithm for Sparse Signal Recovery

Numerical algorithms for l0-norm regularized non-smooth non-convex minimization problems have recently become a topic of great interest within signal processing, compressive sensing, statistics, and machine learning. Nevertheless, the l0-norm makes the problem combinatorial and generally computationally intractable. In this paper, we construct a new surrogate function to approximate the l0-norm ...
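Since the snippet does not show the paper's surrogate, the sketch below uses a generic smooth stand-in, x^2/(x^2 + eps), inside a Barzilai-Borwein (spectral) gradient loop for a regularized least-squares instance; both the surrogate and the safeguarded step rule are assumptions for illustration.

```python
import numpy as np

def objective_grad(A, b, x, lam, eps=1e-2):
    """Gradient of 0.5*||A x - b||^2 + lam * sum_i x_i^2/(x_i^2 + eps),
    where the second term is a generic smooth surrogate for ||x||_0
    (a stand-in, not the surrogate constructed in the paper)."""
    data = A.T @ (A @ x - b)
    surr = lam * 2.0 * eps * x / (x**2 + eps)**2
    return data + surr

def spectral_gradient(A, b, lam, iters=200):
    """Barzilai-Borwein (spectral) gradient iteration with a safeguarded step."""
    x = np.zeros(A.shape[1])
    g = objective_grad(A, b, x, lam)
    step = 1e-2
    for _ in range(iters):
        x_new = x - step * g
        g_new = objective_grad(A, b, x_new, lam)
        s, yv = x_new - x, g_new - g
        sy = s @ yv
        step = (s @ s) / sy if sy > 1e-12 else 1e-2   # BB1 step with fallback
        x, g = x_new, g_new
    return x
```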


Smoothing a Program Soundly and Robustly

We study the foundations of smooth interpretation, a recently proposed program approximation scheme that facilitates the use of local numerical search techniques (e.g., gradient descent) in program analysis and synthesis. While popular techniques for local optimization work well only on relatively smooth functions, functions encoded by real-world programs are infested with discontinuities a...
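The underlying idea can be conveyed with a toy example: a hard branch, whose output jumps discontinuously, is replaced by a smooth blend of its two arms so that gradient descent sees a usable slope. The sigmoid blend and smoothing width below are illustrative stand-ins, not the smoothing semantics developed in the paper.

```python
import numpy as np

def branch_hard(x):
    """Discontinuous program fragment: output jumps at x = 0."""
    return 1.0 if x > 0 else 0.0

def branch_smooth(x, width=0.1):
    """Smoothed stand-in: blend both arms with a sigmoid weight on the guard,
    yielding a differentiable approximation whose gradient exists everywhere."""
    w = 1.0 / (1.0 + np.exp(-x / width))   # weight of the 'then' arm
    return w * 1.0 + (1.0 - w) * 0.0
```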



Journal:
  • SIAM Journal on Optimization

Volume 19, Issue -

Pages -

Publication year: 2008